Package | bullseye/v11 | bookworm/v12 |
---|---|---|
ansible | 2.10.7 | 2.14.3 |
apache | 2.4.56 | 2.4.57 |
apt | 2.2.4 | 2.6.1 |
bash | 5.1 | 5.2.15 |
ceph | 14.2.21 | 16.2.11 |
docker | 20.10.5 | 20.10.24 |
dovecot | 2.3.13 | 2.3.19 |
dpkg | 1.20.12 | 1.21.22 |
emacs | 27.1 | 28.2 |
gcc | 10.2.1 | 12.2.0 |
git | 2.30.2 | 2.39.2 |
golang | 1.15 | 1.19 |
libc | 2.31 | 2.36 |
linux kernel | 5.10 | 6.1 |
llvm | 11.0 | 14.0 |
lxc | 4.0.6 | 5.0.2 |
mariadb | 10.5 | 10.11 |
nginx | 1.18.0 | 1.22.1 |
nodejs | 12.22 | 18.13 |
openjdk | 11.0.18 + 17.0.6 | 17.0.6 |
openssh | 8.4p1 | 9.2p1 |
openssl | 1.1.1n | 3.0.8-1 |
perl | 5.32.1 | 5.36.0 |
php | 7.4+76 | 8.2+93 |
podman | 3.0.1 | 4.3.1 |
postfix | 3.5.18 | 3.7.5 |
postgres | 13 | 15 |
puppet | 5.5.22 | 7.23.0 |
python2 | 2.7.18 | (gone!) |
python3 | 3.9.2 | 3.11.2 |
qemu/kvm | 5.2 | 7.2 |
ruby | 2.7+2 | 3.1 |
rust | 1.48.0 | 1.63.0 |
samba | 4.13.13 | 4.17.8 |
systemd | 247.3 | 252.6 |
unattended-upgrades | 2.8 | 2.9.1 |
util-linux | 2.36.1 | 2.38.1 |
vagrant | 2.2.14 | 2.3.4 |
vim | 8.2.2434 | 9.0.1378 |
zsh | 5.8 | 5.9 |
--fsync
: fsync every written file

--old-dirs
: works like dirs when talking to old rsync

--old-args
: disable the modern arg-protection idiom

--secluded-args, -s
: use the protocol to safely send the args (replaces the protect-args option)

--trust-sender
: trust the remote sender's file list

Content Security Policy: Ignoring 'unsafe-inline' within script-src or style-src: nonce-source or hash-source specified
Content Security Policy: The page's settings blocked the loading of a resource at data:text/css,%0A%20%20%20%20%20%20%20%2 ( style-src ). data:44:30
Content Security Policy: Ignoring 'unsafe-inline' within script-src or style-src: nonce-source or hash-source specified
TypeError: AudioContext is not a constructor 138875 https://discord.com/assets/cbf3a75da6e6b6a4202e.js:262 l https://discord.com/assets/f5f0b113e28d4d12ba16.js:1ed46a18578285e5c048b.js:241:118
What is happening here is that dom.webaudio.enabled is disabled in Firefox.
Then, on a hunch, I searched on Reddit and saw the following. Be careful while visiting the link as it's labelled NSFW, although to my mind there wasn't anything remotely NSFW about it. They do mention using another tool, AudioContext Fingerprint Defender, which supposedly fakes or spoofs an id. As this add-on isn't tracked by the Firefox privacy team, it's hard for me to say anything positive or negative about it.
So, in the end, I stopped using Discord, as the alternative was being tracked by them.
Last but not least, I saw this about a week back. Sooner or later this had to happen as Elon tries to make money off Twitter.
AMD Issues
It has just been a couple of hard weeks for AMD, apparently. The first is the TPM (Trusted Platform Module) issue that was shown by a couple of security researchers. From what is known, with about $200 worth of tools and some time, you can break into somebody's machine if you have physical access to it. Ironically, MS made a huge show about TPM and also made it more or less a requirement if a person wanted to have Windows 11. I remember Matthew Garrett sharing about TPM and issues with Lenovo laptops. While AMD has acknowledged the issue, its response has been somewhat wishy-washy.
But this is not the only issue that has been plaguing AMD. There have been reports of AMD chips literally exploding, and again AMD issued a somewhat wishy-washy response. Asus, though, made some changes, but whether they are for Zen 4 or only Zen 5 parts is not known. Most people are expecting a recession in I.T. hardware this year as well as next year due to high prices. No idea if things will change, if ever.
kubectl may have a completely different impact on the API depending on usage, for example when listing the whole collection of objects (very expensive) vs fetching a single object. The conclusion was to try to avoid hitting the api-server with LIST calls, and to use ResourceVersion, which avoids full dumps from etcd (and which, by the way, is the default when using bare kubectl get calls). I already knew some of this; for example, the jobs-framework-emailer was already making use of this ResourceVersion functionality.
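As a sketch of the difference (hypothetical namespace and pod name, and it needs a live cluster to run against):

```shell
# LIST of the whole collection: a consistent read that the api-server
# has to serve with a full dump from etcd
kubectl get --raw '/api/v1/namespaces/default/pods'

# resourceVersion=0 lets the api-server answer from its watch cache instead,
# avoiding the expensive quorum read from etcd
kubectl get --raw '/api/v1/namespaces/default/pods?resourceVersion=0'

# fetching a single, named object is cheap either way
kubectl get pod some-pod --namespace default
```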
There have been a lot of improvements in the performance side of Kubernetes in recent times, or more
specifically, in how resources are managed and used by the system. I saw a review of resource management from
the perspective of the container runtime and kubelet, and plans to support fancy things like topology-aware
scheduling decisions and dynamic resource claims (changing the pod resource claims without
re-defining/re-starting the pods).
On cluster management, bootstrapping and multi-tenancy
I attended a couple of talks that mentioned kubeadm, and one in particular was from the maintainers
themselves. This was of interest to me because as of today we use it for
Toolforge. They shared all
the latest developments and improvements, and the plans and roadmap for the future, with a special mention to
something they called a "kubeadm operator", apparently capable of auto-upgrading the cluster, auto-renewing
certificates and such.
I also saw a comparison between the different cluster bootstrappers, which to me confirmed that kubeadm was
the best, from the point of view of being a well established and well-known workflow, plus having a very
active contributor base. The kubeadm developers invited the audience to submit feature requests,
so I did.
The different talks confirmed that the basic unit for multi-tenancy in kubernetes is the namespace. Any
serious multi-tenant usage should leverage this. There were some ongoing conversations, in official sessions
and in the hallway, about the right tool to implement K8s-within-K8s, and vcluster
was mentioned enough times for me to be convinced it was the right candidate. This was despite my impression
that multiclusters / multicloud are regarded as hard topics in the general community. I definitely would like to play
with it sometime down the road.
On networking
I attended a couple of basic sessions that served really well to understand how Kubernetes instrumented the
network to achieve its goal. The conference program had sessions to cover topics ranging from network
debugging recommendations, CNI implementations, to IPv6 support. Also, one of the keynote sessions had a
reference to how kube-proxy is not able to perform NAT for SIP connections, which is interesting because I
believe Netfilter Conntrack could do it if properly configured. One of the conclusions on the CNI front was
that Calico has a massive community adoption (in Netfilter mode), which is reassuring, especially considering
it is the one we use for Toolforge Kubernetes.
On jobs
I attended a couple of talks that were related to HPC/grid-like usages of Kubernetes. I was truly impressed
by some folks out there who were using Kubernetes Jobs on massive scales, such as to train machine learning
models and other fancy AI projects.
It is acknowledged in the community that the early implementation of things like Jobs and CronJobs had some
limitations that are now gone, or at least greatly improved. Some new functionalities have been added as
well. Indexed Jobs, for example, give each Job pod a number (an index) so it can process a chunk of a larger
batch of data based on that index. This allows for grid-like features such as sequential (or, again,
indexed) processing, coordination between Jobs, and more graceful Job restarts. My first reaction was: Is that
something we would like to enable in Toolforge Jobs Framework?
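As an illustration of the idea (not code from the Toolforge Jobs Framework): Kubernetes exposes the index to each pod of an Indexed Job via the JOB_COMPLETION_INDEX environment variable, and a worker can derive its chunk of the batch from it. The chunking helper below is a hypothetical sketch.

```python
import os

def chunk_for_index(items, total_chunks, index):
    """Return the slice of `items` that the Job with this index should process."""
    size = (len(items) + total_chunks - 1) // total_chunks  # ceiling division
    return items[index * size:(index + 1) * size]

# Kubernetes sets JOB_COMPLETION_INDEX on each pod of an Indexed Job
index = int(os.environ.get("JOB_COMPLETION_INDEX", "0"))
batch = list(range(10))  # stand-in for the real batch of work items
print(chunk_for_index(batch, total_chunks=4, index=index))
```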
On policy and security
A surprisingly good number of sessions covered interesting topics related to policy and security. It was nice
to learn two realities:
```
$ sudo hwinfo --monitor | grep Model
  Model: "PHILIPS PHL 221S8L"
```

FWIW, hwinfo is the latest version:

```
$ sudo hwinfo --version
21.82
```
I did see a couple of movies before starting to write this blog post. It is not an exceptional monitor, but better than before. I had the option of three brands: Dell (most expensive), Philips (middle) and LG (lowest in price). Interestingly, ViewSonic disappeared from the market about 5 years back and made a comeback just a couple of years ago. Even Philips, which had exited the PC monitor market almost a decade back, re-entered it. Apart from the branding, it doesn't make much of a difference, as almost all the products, including the above monitors, are produced in China. I did remember her a lot while buying the monitor, as I'm sure she would have enjoyed it far more than me, but that was not to be.
To run lintian from sid, you can define a job to do so:

```yaml
lintian:
  runs-on: ubuntu-latest
  container: debian:sid
  steps:
    - [ do something to get a package to run lintian on ]
    - run: apt-get update
    - run: apt-get install -y --no-install-recommends lintian
    - run: lintian --info --display-info *.changes
```
This runs on GitHub's Ubuntu runner (latest right now means 22.04 for GitHub), but then uses Docker to run the debian:sid
container and executes all further steps inside it.
Pretty short and straightforward, right?
Now, lintian does static analysis of the package; it doesn't need to install it.
What if we want to run autopkgtest, which performs tests on an actually installed package?
autopkgtest comes with various "virt servers", which provide isolation of the testbed, so that it does not interfere with the host system.
The simplest available virt server, autopkgtest-virt-null, doesn't actually provide any isolation, as it runs things directly on the host system.
This might seem fine when executed inside an ephemeral container in a CI environment, but it also means that multiple tests have the potential to influence each other, as there is no way to revert the testbed to a clean state.
For that, there are other, "real", virt servers available: chroot, lxc, qemu, docker and many more.
They all have one thing in common: to use them, one needs to somehow provide an "image" (a prepared chroot, a tarball of a chroot, a vm disk, a container, …, you get it) to operate on, and most either bring a tool to create such an "image" or rely on a "registry" (online repository) to provide them.
Most users of autopkgtest on GitHub (that I could find with their terrible search) are using either the null or the lxd virt servers, probably because these are dead simple to set up (null) or the most "native" (lxd) in the Ubuntu environment.
As I wanted to execute multiple tests that for sure would interfere with each other, the null virt server was out of the discussion pretty quickly.
The lxd one also felt odd, as that meant I'd need to set up lxd (it can be done in a few commands, but still) and it would need to download stuff from Canonical, incurring costs (which I couldn't care less about) and taking time (which I do care about!).
Enter autopkgtest-virt-docker, which recently was added to autopkgtest! No need to set things up, as GitHub already did all the Docker setup for me, and almost no waiting time to download the containers, as GitHub does heavy caching of stuff coming from Docker Hub (or at least it feels like that).
The only drawback? It was added in autopkgtest 5.23, which Ubuntu 22.04 doesn't have.
"We need to go deeper" and run autopkgtest from a sid container!
With this idea, our current job definition would look like this:
```yaml
autopkgtest:
  runs-on: ubuntu-latest
  container: debian:sid
  steps:
    - [ do something to get a package to run autopkgtest on ]
    - run: apt-get update
    - run: apt-get install -y --no-install-recommends autopkgtest autodep8 docker.io
    - run: autopkgtest *.changes --setup-commands="apt-get update" -- docker debian:sid
```
(--setup-commands="apt-get update" is needed, as the container comes with an empty apt cache and wouldn't be able to find the dependencies of the tested package.)
However, this will fail:
```
# autopkgtest *.changes --setup-commands="apt-get update" -- docker debian:sid
autopkgtest [10:20:54]: starting date and time: 2023-04-07 10:20:54+0000
autopkgtest [10:20:54]: version 5.28
autopkgtest [10:20:54]: host a82a11789c0d; command line: /usr/bin/autopkgtest bley_2.0.0-1_amd64.changes '--setup-commands=apt-get update' -- docker debian:sid
Unexpected error:
Traceback (most recent call last):
  File "/usr/share/autopkgtest/lib/VirtSubproc.py", line 829, in mainloop
    command()
  File "/usr/share/autopkgtest/lib/VirtSubproc.py", line 758, in command
    r = f(c, ce)
        ^^^^^^^^
  File "/usr/share/autopkgtest/lib/VirtSubproc.py", line 692, in cmd_copydown
    copyupdown(c, ce, False)
  File "/usr/share/autopkgtest/lib/VirtSubproc.py", line 580, in copyupdown
    copyupdown_internal(ce[0], c[1:], upp)
  File "/usr/share/autopkgtest/lib/VirtSubproc.py", line 607, in copyupdown_internal
    copydown_shareddir(sd[0], sd[1], dirsp, downtmp_host)
  File "/usr/share/autopkgtest/lib/VirtSubproc.py", line 562, in copydown_shareddir
    shutil.copy(host, host_tmp)
  File "/usr/lib/python3.11/shutil.py", line 419, in copy
    copyfile(src, dst, follow_symlinks=follow_symlinks)
  File "/usr/lib/python3.11/shutil.py", line 258, in copyfile
    with open(dst, 'wb') as fdst:
         ^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest-virt-docker.shared.kn7n9ioe/downtmp/wrapper.sh'
autopkgtest [10:21:07]: ERROR: testbed failure: unexpected eof from the testbed
```
autopkgtest-virt-docker tries to use a shared directory (using Docker's --volume) to exchange things with the testbed (for the downtmp-host capability). As my autopkgtest is running inside a container itself, nothing it tells the Docker daemon to mount will actually be visible to it.
In retrospect this makes total sense, and autopkgtest-virt-docker has a switch to "fix" the issue: --remote, as the Docker daemon is technically remote when viewed from the place autopkgtest runs at.
I'd argue this is not a bug in autopkgtest(-virt-docker), as the situation is actually cared for. There is even some auto-detection of "remote" daemons in the code, but it doesn't "know" how to detect the case where the daemon socket is mounted (vs being set via an environment variable). I've opened an MR (assume remote docker when running inside docker) which should detect the case of running inside a Docker container, which kind of implies the daemon is remote.
Not sure the patch will be accepted (it is a band-aid after all), but in the meantime I am quite happy with using --remote, and so could you ;-)
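With the flag, the last run step of the job above would become (same placeholder *.changes as before):

```shell
autopkgtest *.changes --setup-commands="apt-get update" -- docker --remote debian:sid
```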
Series: Discworld #26
Publisher: Harper
Copyright: May 2001
Printing: August 2014
ISBN: 0-06-230739-8
Format: Mass market
Pages: 420
clang++-16
have found a fan in Brian
Ripley, and so he sent us a note. And as the issue was trivially
reproducible with clang++-15
here too I had it fixed in no
time. And both changes taken together form the incremental 0.2.7
release.
RcppSMC
provides Rcpp-based bindings to R for the Sequential Monte Carlo
Template Classes (SMCTC) by Adam Johansen described in his JSS article.
Sequential Monte Carlo is also referred to as Particle Filter
in some contexts. The package now also features the Google Summer of Code
work by Leah South in 2017, and by Ilya Zarubin in 2021.
The release is summarized below.
Courtesy of my CRANberries, there is a diffstat report for this release. More information is on the RcppSMC page. Issues and bugreports should go to the GitHub issue tracker. If you like this or other open-source work I do, you can now sponsor me at GitHub.

Changes in RcppSMC version 0.2.7 (2023-03-22)
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.